The number of international benchmarking competitions is steadily increasing across various fields of machine learning (ML) research and practice. So far, however, little is known about common practice or the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
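Two of the strategies the survey highlights, k-fold cross-validation on the training set and ensembling by prediction averaging, can be sketched in a few lines. This is a minimal stdlib illustration of the general techniques, not any participant's method; the model and predictions are placeholders.

```python
def kfold_indices(n_samples, k=5):
    """Split sample indices into k near-equal contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs, each fold serving once as validation."""
    folds = kfold_indices(n_samples, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, val_idx

def ensemble_mean(predictions):
    """Average per-sample predictions from multiple (identical or
    heterogeneous) models; predictions is a list of per-model lists."""
    n_models = len(predictions)
    return [sum(p[i] for p in predictions) / n_models
            for i in range(len(predictions[0]))]
```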
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g., geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
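The "additive and compositional" design the abstract refers to can be illustrated with a stdlib-only sketch of the transform-pipeline pattern. This is not MONAI's actual API (its transforms live in `monai.transforms` and operate on tensors and metadata); the `scale_intensity` and `clip` transforms below are hypothetical stand-ins that show the design idea only.

```python
class Compose:
    """Chain callables so transforms apply in sequence to the same data."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data):
        for transform in self.transforms:
            data = transform(data)
        return data

# Hypothetical intensity transforms operating on a flat list of voxel values.
def clip(lo, hi):
    def _apply(voxels):
        return [max(lo, min(hi, v)) for v in voxels]
    return _apply

def scale_intensity(lo=0.0, hi=1.0):
    def _apply(voxels):
        mn, mx = min(voxels), max(voxels)
        span = (mx - mn) or 1.0
        return [lo + (v - mn) / span * (hi - lo) for v in voxels]
    return _apply

# Transforms compose additively: adding a preprocessing step is one list entry.
pipeline = Compose([clip(-100.0, 300.0), scale_intensity()])
```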
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package that allows researchers to bring their data science workflows implemented in any training library (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
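The core aggregation idea behind such FL workflows, federated averaging (FedAvg), fits in a few lines: clients train locally and the server averages their weights, weighted by local sample counts, without ever seeing the raw data. This stdlib sketch shows only that idea; real FLARE applications use its own workflow and communication APIs.

```python
def local_update(weights, gradient, lr=0.1):
    """One hypothetical local SGD step on a client's private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    """Server-side average of client models, weighted by local dataset size.

    client_weights: list of per-client weight vectors (same length each).
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]
```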
For industrial-scale advertising systems, predicting ad click-through rate (CTR) is a central problem. Ad clicks constitute a significant class of user engagement and are often used as the primary signal of an ad's usefulness to a user. Moreover, in cost-per-click advertising systems, the expected click-through rate feeds directly into value estimation. CTR model development is therefore a major investment for most internet advertising companies. Engineering such systems requires many machine learning (ML) techniques suited to online learning that go well beyond conventional accuracy improvements, especially regarding efficiency, reproducibility, calibration, and credit attribution. We present a case study of practical techniques deployed in Google's search ads CTR model. This paper provides an industry case study that highlights important areas of current ML research and illustrates how impactful new ML methods are evaluated and made useful in a large industrial setting.
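To make the online-learning setting concrete, here is a generic online logistic-regression CTR sketch: one SGD step on the log loss per observed (impression, click) example. It is a textbook baseline, not Google's deployed model; calibration in this context means the predicted CTR tracking the empirical CTR.

```python
import math

def predict_ctr(weights, features):
    """Predicted click probability: sigmoid of the weight-feature dot product."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, features, clicked, lr=0.05):
    """One online update on a single example; clicked is 0 or 1.
    The gradient of the log loss w.r.t. each weight is (p - clicked) * x."""
    p = predict_ctr(weights, features)
    return [w - lr * (p - clicked) * x for w, x in zip(weights, features)]
```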
Over the past decades, considerable attention has been paid to bio-inspired intelligence and its applications to robotics. This paper presents a comprehensive survey of bio-inspired intelligence, with a focus on neurodynamics approaches, particularly for path planning and control of autonomous robotic systems. First, the bio-inspired shunting model and its variants (the additive model and the gated dipole model) are introduced, and their main characteristics are described in detail. Then, two main neurodynamics applications are reviewed: real-time path planning and control of various robotic systems. A bio-inspired neural network framework, featuring the neurodynamics models, is applied to mobile robots, cleaning robots, and underwater robots. Bio-inspired neural networks have been widely used for collision-free navigation and cooperation, without any learning procedures, global cost functions, or prior knowledge of dynamic environments. In addition, bio-inspired backstepping controllers for various robotic systems, which are able to eliminate velocity jumps when large initial tracking errors occur, are further discussed. Finally, the paper discusses current challenges and future research directions.
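The shunting model the survey centers on is typically written as dx/dt = -A·x + (B - x)·Se - (D + x)·Si, where Se and Si are excitatory and inhibitory inputs; the activity x stays bounded in [-D, B] without any learning procedure. A minimal forward-Euler sketch, with illustrative parameter values:

```python
def shunting_step(x, Se, Si, A=1.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of dx/dt = -A*x + (B - x)*Se - (D + x)*Si.
    A is passive decay; B and D are the upper and lower activity bounds."""
    dx = -A * x + (B - x) * Se - (D + x) * Si
    return x + dt * dx

def simulate(Se, Si, steps=2000, x0=0.0):
    """Run the dynamics to (near) steady state for constant inputs."""
    x = x0
    for _ in range(steps):
        x = shunting_step(x, Se, Si)
    return x
```

For constant inputs the steady state is x* = (B·Se - D·Si) / (A + Se + Si), so a strong excitatory input saturates toward B and a strong inhibitory input toward -D, which is the automatic-gain-control property these planners exploit.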
Immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and the investigation of pathogenesis. HER2 staining demands laborious tissue preparation and chemical processing performed by histotechnologists, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method that uses a conditional generative adversarial network trained to rapidly transform autofluorescence microscopy images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopy images, matching the standard HER2 IHC staining performed chemically on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting the virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study conducted by the same diagnosticians further revealed that the virtually stained HER2 images have comparable staining quality to their immunohistochemically stained counterparts in terms of nuclear detail, membrane clarity, and staining artifacts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory, and can be extended to other types of biomarkers to accelerate IHC tissue staining in life science and biomedical workflows.
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angular range is challenging, especially when the angular range is extremely small. Both analytical and iterative models require more projections for effective modeling. Deep learning methods have gained prevalence due to their excellent reconstruction performance, but such success is mainly confined to the same dataset and does not generalize to datasets with different distributions. Here, we propose ExtraPolationNetwork for limited-angle CT reconstruction by introducing a sinogram extrapolation module, which is theoretically justified. The module supplements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, comparable to existing methods. More importantly, we show that using the sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., the COVID-19 and LIDC datasets) compared with existing methods.
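The setting can be made concrete with a toy stdlib sketch of the idea behind sinogram extrapolation: given projections over a limited angular range, estimate the rows for the missing angles before reconstruction. The paper learns this mapping with a network; the naive linear extrapolation per detector bin below only illustrates what the module's input and output look like.

```python
def extrapolate_sinogram(sinogram, n_missing):
    """sinogram: list of rows, one per acquired projection angle; each row
    holds the detector-bin readings for that angle. Appends n_missing rows
    by linearly extending the trend of the last two acquired rows."""
    rows = [list(r) for r in sinogram]
    for _ in range(n_missing):
        last, prev = rows[-1], rows[-2]
        rows.append([2.0 * a - b for a, b in zip(last, prev)])
    return rows
```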
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
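The distillation mechanism such frameworks build on is the standard temperature-softened soft-target loss; the fairness regularization is RELIANT's contribution and is not shown here. A minimal stdlib sketch of that standard KD loss:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    m = max(l / T for l in logits)  # subtract max for numerical stability
    exps = [math.exp(l / T - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional so gradients keep their magnitude."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return T * T * kl
```

In practice this term is combined with the cross-entropy on ground-truth labels, so the student matches both the data and the teacher's soft behavior.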